Abstract:
As scientific research becomes more data-intensive, there is an increasing need for scalable, reliable, and high-performance storage systems. Such data repositories must provide both data archival services and rich metadata, and cleanly integrate with large-scale computing resources. ROARS is a hybrid approach to distributed storage that provides large, robust, scalable storage together with efficient rich-metadata queries for scientific applications. In this paper, we present the design and implementation of ROARS, focusing primarily on the challenge of maintaining data integrity across long time scales. We evaluate the performance of ROARS on a storage cluster, comparing it to the Hadoop distributed file system and a centralized file server. We observe that ROARS has read and write performance that scales with the number of storage nodes, and integrity checking that scales with the size of the largest node. We demonstrate the ability of ROARS to function correctly through multiple system failures and reconfigurations. ROARS has been in production use for over three years as the primary data repository for a biometrics research lab at the University of Notre Dame.
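The abstract's claim that integrity checking scales with the size of the largest node suggests a design in which each storage node verifies its own files concurrently. The sketch below is not from the paper; it is a minimal illustration of that idea, assuming a SHA-256 manifest of expected digests (the function names and data layout are invented for illustration).

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def checksum(data: bytes) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(data).hexdigest()

def verify_node(files: dict, manifest: dict) -> list:
    """Re-hash every file on one node and report names that mismatch the manifest."""
    return [name for name, data in files.items()
            if checksum(data) != manifest.get(name)]

def verify_cluster(nodes: list, manifest: dict) -> list:
    """Check all nodes concurrently; wall-clock time tracks the largest node."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda files: verify_node(files, manifest), nodes)
    return [bad for node_result in results for bad in node_result]
```

Because every node scans its own files in parallel, adding nodes does not lengthen a full integrity sweep; only growing a single node does.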
Abstract:
Electric power distribution systems, particularly those with overhead circuits, operate radially even though their topology is meshed, so a set of circuits needs to be disconnected. In this context, the problem of optimal reconfiguration of a distribution system is formulated with the goal of finding a radial topology for the operation of the system. This paper uses experimental tests and preliminary theoretical analysis to show that a radial topology is one of the worst choices if the goal is to minimize power losses in a power distribution system. For this reason, it is important to open a theoretical and practical discussion on whether it is worthwhile to operate a distribution system radially. This topic is becoming increasingly important in the modern operation of electrical systems, which are required to operate as efficiently as possible, using all available resources to improve and optimize the operation of electric power systems. Experimental tests demonstrate the importance of this issue.
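The intuition behind the claim that meshed operation can reduce losses has a simple resistive form: splitting the same load current across parallel paths lowers total I²R loss. The toy calculation below is not from the paper; the current and resistance values are purely illustrative.

```python
def line_loss(current_a: float, resistance_ohm: float) -> float:
    """Ohmic loss I^2 * R on a single line, in watts."""
    return current_a ** 2 * resistance_ohm

def parallel_loss(current_a: float, resistance_ohm: float, n_lines: int) -> float:
    """Total loss when n identical lines share the load current equally."""
    per_line = current_a / n_lines
    return n_lines * line_loss(per_line, resistance_ohm)

# Radial: one line carries the full 100 A load -> 5000 W of loss.
radial = line_loss(100.0, 0.5)
# Meshed: two parallel identical lines split the current -> 2500 W.
meshed = parallel_loss(100.0, 0.5, 2)
```

Halving the current per path quarters the per-line loss, so two paths cut the total in half, which is the effect the abstract's experiments explore at network scale.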
Abstract:
The industry's recent challenges in a dramatically changing environment can occasionally result in calls for local governments to consider municipalization of local power resources. But simply changing the ownership structure of shareholder-owned electric companies and turning them into public utilities offers a false sense of progress, when what is truly needed is investment in new technologies, the nation's transmission grid and distribution systems, and new approaches.
Abstract:
This paper proposes a non-cooperative game based technique to replicate data objects across a distributed system of multiple servers in order to reduce user-perceived Web access delays. In the proposed technique, computational agents represent servers and compete with each other to optimize the performance of their servers. The optimality of a non-cooperative game is typically described by Nash equilibrium, which is based on spontaneous and non-deterministic strategies. However, Nash equilibrium may or may not guarantee system-wide performance. Furthermore, there can be multiple Nash equilibria, making it difficult to decide which one is the best. In contrast, the proposed technique uses the notion of pure Nash equilibrium, which, if achieved, guarantees stable optimal performance. In the proposed technique, agents use deterministic strategies that work in conjunction with their self-interested nature but ensure system-wide performance enhancement. In general, a pure Nash equilibrium is hard to achieve, but we prove the existence of such an equilibrium in the proposed technique. The proposed technique is also experimentally compared against some well-known conventional replica allocation methods, such as branch and bound, greedy, and genetic algorithms.
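The flavor of self-interested agents converging to a pure strategy profile can be illustrated with best-response dynamics. The sketch below is a deliberately simplified replica-placement game, not the authors' formulation: servers sit on a line, `storage_cost` and the demand-times-distance access cost are invented for illustration, and convergence of best-response dynamics is not guaranteed in general (the paper proves existence of a pure Nash equilibrium for its own game, not for this toy).

```python
def best_response_dynamics(demand, storage_cost, max_rounds=100):
    """Each server repeatedly picks the cheaper of two pure strategies given
    the others' current choices: host a replica (pay storage_cost) or fetch
    remotely (pay demand * distance to the nearest other replica). A fixed
    point of this loop is a pure Nash equilibrium of the toy game."""
    n = len(demand)
    hosts = [True] * n  # start with every server replicating

    def access_cost(i):
        # distance to the nearest replica held by *another* server
        dists = [abs(i - j) for j in range(n) if hosts[j] and j != i]
        return demand[i] * min(dists) if dists else float("inf")

    for _ in range(max_rounds):
        changed = False
        for i in range(n):
            want = storage_cost <= access_cost(i)  # best response for server i
            if want != hosts[i]:
                hosts[i] = want
                changed = True
        if not changed:
            return hosts  # no profitable deviation remains
    return hosts
```

With high demand at the edges and cheap storage relative to edge traffic, only the edge servers keep replicas while the middle servers free-ride on their neighbors.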
Abstract:
This paper describes a set of facilities for programming distributed transactions over replicated files accessed by primary key. The files are located on several computers connected by a network. Each site has the set of GNU dbm (Gdbm) routines for local file management [15]. On top of this platform we have built an interface and a set of services for distributed transaction programming. The resulting programming environment, "DGDBM", offers transparency with respect to data distribution and data replication, presenting a centralized view to the programmer. It provides the functions of distributed transaction management, such as failure recovery, mutual consistency between copies, and concurrency control. DGDBM is a useful support for programming distributed applications over replicated files in UNIX networks, and it is available as an API (application programming interface) for the C programmer. This paper describes the services DGDBM offers to the programmer, the architecture of the system, the solutions adopted for distributed transaction management, the general aspects of design and implementation, and the perspectives and planned extensions for this project.
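The abstract does not describe DGDBM's actual recovery or concurrency-control machinery; the sketch below only illustrates the all-or-nothing flavor of a replicated write, using a textbook two-phase commit over in-memory dictionaries standing in for the per-site Gdbm files (all class and function names are invented for illustration).

```python
class Replica:
    """A stand-in for one site's Gdbm file: committed data plus a staging area."""
    def __init__(self):
        self.data = {}
        self.staged = None

    def prepare(self, updates) -> bool:
        self.staged = dict(updates)    # phase 1: stage the writes
        return True                    # vote "ready" (a real site could vote abort)

    def commit(self):
        self.data.update(self.staged)  # phase 2: apply the staged writes
        self.staged = None

    def abort(self):
        self.staged = None             # discard staged writes on any "abort" vote

def distributed_put(replicas, updates) -> bool:
    """Two-phase commit: either every replica applies the updates, or none does."""
    if all(r.prepare(updates) for r in replicas):
        for r in replicas:
            r.commit()
        return True
    for r in replicas:
        r.abort()
    return False
```

A real implementation would also log the prepare/commit decisions durably so a crashed coordinator or site can recover, which is the failure-recovery service the abstract mentions.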
Abstract:
This paper presents a new planning optimization model for distribution substation siting, sizing, and timing. The proposed model uses linear functions to express the total cost function. The developed model includes different electrical constraints.
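The abstract is truncated, so the model's actual formulation is unknown; the sketch below is only a toy version of the siting-and-sizing decision with a linear total cost (fixed charge plus cost per MVA served), solved by brute-force enumeration rather than the paper's optimization model. The candidate data and the proportional load split are assumptions made for illustration.

```python
from itertools import combinations

def cheapest_plan(candidates, demand_mva):
    """Pick the subset of candidate substations whose combined capacity covers
    demand at minimum linear cost. candidates: list of
    (capacity_mva, fixed_cost, cost_per_mva)."""
    best, best_cost = None, float("inf")
    for r in range(1, len(candidates) + 1):
        for subset in combinations(range(len(candidates)), r):
            cap = sum(candidates[i][0] for i in subset)
            if cap < demand_mva:
                continue  # this subset cannot serve the load
            # linear total cost: fixed charges plus per-MVA cost on the load,
            # with load split across sites in proportion to capacity
            cost = sum(candidates[i][1]
                       + candidates[i][2] * demand_mva * candidates[i][0] / cap
                       for i in subset)
            if cost < best_cost:
                best, best_cost = subset, cost
    return best, best_cost
```

Enumeration is exponential in the number of candidates; a practical planning model would instead express the same linear cost in a mixed-integer program.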
Abstract:
Process migration is the act of transferring a process between two machines. It enables dynamic load distribution, fault resilience, eased system administration, and data access locality. Despite these goals and ongoing research efforts, migration has not achieved widespread use. With the increasing deployment of distributed systems in general, and distributed operating systems in particular, process migration is again receiving more attention in both research and product development.
Abstract:
The adoption of network-based technologies, such as the WWW and middleware platforms, has significantly increased the complexity of distributed applications, as well as the Quality-of-Service requirements for the underlying network. Distributed application modelling is nowadays far more demanding than network modelling, for which numerous solutions are already employed in commercial tools. We introduce a simulation modelling approach for distributed systems, with emphasis on distributed applications. The proposed scheme enables the in-depth description of application functionality, the accurate estimation of network load, and the extension of existing application models to support further customisation. It supports widely employed architectural models, such as the client–server model and its variations, and is based on multi-layer decomposition. Application functionality is described using pre-defined operations, which can be further decomposed into simpler ones, ultimately resulting in elementary actions corresponding to primitive network operations, such as transfer and processing. Even if realisation of this scheme proves time-consuming, individual application modelling is performed with consistency and considerably lower overhead. The distributed system simulation environment built to realise the proposed modelling scheme, and a case study highlighting key features of the overall approach, are also presented.
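The multi-layer decomposition the abstract describes, where composite operations bottom out in elementary transfer and processing actions, can be sketched as a small recursive structure. This is not the paper's modelling environment; the class names and the example operation are invented for illustration.

```python
class Transfer:
    """Elementary action: move nbytes over the network."""
    def __init__(self, nbytes):
        self.nbytes = nbytes

class Process:
    """Elementary action: local computation, which adds no network load."""
    def __init__(self, ops):
        self.ops = ops

class Operation:
    """A composite operation decomposes into sub-operations or elementary actions."""
    def __init__(self, name, parts):
        self.name, self.parts = name, parts

def network_load(step) -> int:
    """Recursively sum the bytes every Transfer under this step puts on the wire."""
    if isinstance(step, Transfer):
        return step.nbytes
    if isinstance(step, Process):
        return 0
    return sum(network_load(p) for p in step.parts)

# A client-server "fetch page" operation: request, server work, response.
fetch = Operation("fetch-page", [
    Transfer(200),
    Operation("serve", [Process(10_000), Transfer(50_000)]),
])
```

Walking the same tree with per-action timings instead of byte counts would give the load-over-time estimate a simulator needs.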
Abstract:
This paper investigates unreliable failure detectors with restricted properties, in the context of asynchronous distributed systems made up of n processes of which at most f may crash. "Restricted" means that the completeness and accuracy properties defining a failure detector class are not required to involve all the correct processes but only k and k' of them, respectively (k are involved in the completeness property, and k' in the accuracy property). These restricted properties define the classes R(k,k') and ◊R(k,k') of unreliable failure detectors. A reduction protocol that transforms a restricted failure detector into its non-restricted counterpart is presented. It is shown that the reduction requires k + k' > n (to be safe) and max(k,k') ≤ n - f (to be live). So, when these two conditions are satisfied, R(k,k') and ◊R(k,k') are equivalent to Chandra and Toueg's failure detector classes S and ◊S, respectively. This theoretical transformation is also interesting from a practical point of view, because the restricted properties are usually easier to satisfy than their non-restricted counterparts in asynchronous distributed systems.
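The two conditions on the reduction are simple arithmetic on the system parameters, so they are easy to check directly. The helper below encodes exactly the inequalities stated in the abstract (the function name is mine).

```python
def reduction_feasible(n: int, f: int, k: int, k_prime: int):
    """Check the two conditions the reduction protocol needs:
    safety requires k + k' > n, liveness requires max(k, k') <= n - f.
    Returns (safe, live)."""
    safe = k + k_prime > n
    live = max(k, k_prime) <= n - f
    return safe, live
```

For example, with n = 5 and f = 1, the choice k = k' = 3 satisfies both conditions, so a restricted detector with those parameters can be boosted to its non-restricted counterpart; k = 2, k' = 3 fails the safety condition because 2 + 3 is not greater than 5.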
Abstract:
Deployment of high-penetration photovoltaic (PV) power is expected to have a range of effects, both positive and negative, on the distribution grid. The magnitude of these effects may vary greatly depending upon feeder topology, climate, PV penetration level, and other factors. In this paper we present a simulation study of eight representative distribution feeders in three California climates at PV penetration levels up to 100%, supported by a unique database of distributed PV generation data that enables us to capture the impact of PV variability on feeder voltage and voltage regulating equipment. We find that feeder location (i.e. climate) has a stronger impact than feeder type on the incidence of reverse power flow, reductions in peak loading, and the presence of voltage excursions. On the other hand, we find that feeder characteristics have a stronger impact than location on the magnitude of loss reduction and changes in voltage regulator operations. We find that secondary distribution transformer aging is negligibly affected in almost all scenarios. (C) 2016 Elsevier Ltd. All rights reserved.